K-fold cross-validation, which is also named the rotation estimation approach, divides a data set into K subsets and builds up K models [Devijver and Kittler, 1982]. Each of the K models is trained on K − 1 folds (subsets) and tested on the one reserved subset, which is called the test subset, in turn. The final performance is evaluated over all K test results of the K models. Figure 3.15 shows a 3-fold cross-validation procedure. K-fold cross-validation is commonly implemented using a loop which rotates the subset of samples selected for training and testing. The other important feature of K-fold cross-validation is that each data point is tested exactly once.

Figure 3.15: A diagram of 3-fold cross-validation. From left to middle, two folds are selected for training a model. From right to middle, one fold is selected for testing.
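The rotation loop described above can be written compactly. The following is a minimal sketch in Python with NumPy; the model_factory helper, the fit/predict model interface, and the accuracy metric are illustrative assumptions rather than part of the original text.

import numpy as np

def k_fold_cross_validation(model_factory, X, y, K=3, seed=0):
    """Rotate K subsets: train on K - 1 of them, test on the reserved one."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))        # one shuffle before splitting
    folds = np.array_split(indices, K)       # K roughly equal subsets

    accuracies = []
    for k in range(K):
        test_idx = folds[k]                  # the reserved test subset
        train_idx = np.concatenate([folds[j] for j in range(K) if j != k])

        model = model_factory()              # a fresh model for each fold
        model.fit(X[train_idx], y[train_idx])
        predictions = model.predict(X[test_idx])
        accuracies.append(np.mean(predictions == y[test_idx]))

    # The final performance is evaluated over all K test results.
    return float(np.mean(accuracies))

In practice, library routines such as scikit-learn's KFold perform the same rotation of training and test subsets; the sketch above only makes the loop explicit.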

The Jackknife test [Quenouille, 1949; Bishop, 1996], which is commonly used for a medium- or small-sized data set, is a typical case of K-fold cross-validation in which only one data point is reserved for testing each time. Suppose there are N data points. Rather than constructing K models as in K-fold cross-validation, N models are constructed in the Jackknife test approach for the generalisation test of a supervised model. An important benefit of the Jackknife test is its robustness, because there is no uncertainty when using the Jackknife test: there is no random process regarding data division in a Jackknife test.
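A minimal sketch of the Jackknife (leave-one-out) test, under the same assumed model_factory and fit/predict interface as in the K-fold sketch above, is given below; note that no random division step appears anywhere in the loop, which is the source of the robustness mentioned above.

import numpy as np

def jackknife_test(model_factory, X, y):
    """Build N models; each is tested on the single data point left out."""
    N = len(X)
    correct = 0
    for i in range(N):
        train_idx = np.delete(np.arange(N), i)   # all points except the i-th
        model = model_factory()
        model.fit(X[train_idx], y[train_idx])
        prediction = model.predict(X[i:i + 1])   # test on the reserved point
        correct += int(prediction[0] == y[i])
    return correct / N                           # accuracy over all N tests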

As an example, a classification data set [Ajaz and Hussain, 2015] was used to demonstrate how evaluation and generalisation are applied. Suppose the